Implementations of asynchronous self-organizing maps on OpenMP and MPI parallel computers
Abstract
In [1], we presented an asynchronous parallel algorithm for self-organizing maps based on a recently defined energy function that leads to a topological map. We generalized the existing stochastic gradient approach to an asynchronous parallel stochastic gradient method for generating a topological map on a distributed (MIMD) computer system. We proved theoretically that our algorithm converges, and simulations showed it to be effective. In this paper, we implement the algorithm on practical parallel computers of two different types, OpenMP and MPI, at the Supercomputing Institute of the University of Minnesota. By analyzing the experimental results, we demonstrate the convergence, efficiency, and speed-up of our algorithm.
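For readers unfamiliar with the serial baseline that the paper parallelizes, a minimal single-process sketch of stochastic-gradient SOM training might look as follows. This is an illustration only, not the authors' asynchronous parallel algorithm or their energy function; the grid size, decay schedules, and all function names here are our assumptions.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=20, lr0=0.5, sigma0=1.5, seed=0):
    """Serial stochastic-gradient SOM training (illustrative sketch).

    Each sample pulls the best-matching unit and its grid neighbors
    toward itself; learning rate and neighborhood radius decay over time.
    """
    rng = np.random.default_rng(seed)
    n_nodes = grid[0] * grid[1]
    # One weight vector per map node, randomly initialized
    w = rng.random((n_nodes, data.shape[1]))
    # Grid coordinates of each node, used by the neighborhood function
    coords = np.array([(r, c) for r in range(grid[0])
                       for c in range(grid[1])], dtype=float)
    total, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            frac = step / total
            lr = lr0 * (1.0 - frac)               # decaying learning rate
            sigma = sigma0 * (1.0 - 0.9 * frac)   # shrinking neighborhood
            bmu = int(np.argmin(((w - x) ** 2).sum(axis=1)))  # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2.0 * sigma ** 2))  # Gaussian neighborhood weight
            w += lr * h[:, None] * (x - w)        # stochastic-gradient step
            step += 1
    return w

def quantization_error(data, w):
    """Mean distance from each sample to its nearest map node."""
    return float(np.mean([np.min(np.linalg.norm(w - x, axis=1)) for x in data]))
```

The asynchronous parallel version studied in the paper distributes updates of this kind across processors without global synchronization; the sketch above only fixes the underlying update rule being parallelized.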
Similar resources
Parallel computing using MPI and OpenMP on self-configured platform, UMZHPC.
Parallel computing is a topic of interest for a broad scientific community, since it facilitates many time-consuming algorithms in different application domains. In this paper, we introduce a novel platform for parallel computing using the MPI and OpenMP programming models on a set of networked PCs. UMZHPC is a free Linux-based parallel computing infrastructure that has been developed to cr...
A Practical Method to Implement Asynchronous Iterative Algorithms on MPI and a Case Study for Asynchronous Self-Organizing Maps
In this paper, an effective implementation scheme for asynchronous parallel iterative algorithms on message-passing systems using the MPI non-blocking communication model is proposed. The main idea of the method is to use an MPI_IPROBE function to check for the existence of pending messages without receiving them, thereby allowing us to write programs that interleave local computation with the proces...
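The probe-then-receive pattern described above can be illustrated without an MPI runtime: in this hedged emulation a thread-safe queue stands in for the MPI message buffer, and `Queue.empty()`/`get_nowait()` play the roles of `MPI_IPROBE` and a subsequent receive. All names here are ours, and the "computation" is a trivial stand-in.

```python
import queue

def worker(inbox, outbox, n_iters):
    """Interleave local computation with non-blocking message checks.

    Emulates the MPI_IPROBE pattern: drain any pending messages without
    ever blocking, then proceed with local work using the freshest value
    seen so far.
    """
    local = 0.0
    latest = None
    for _ in range(n_iters):
        # "Probe" step: consume pending messages without blocking
        while not inbox.empty():
            latest = inbox.get_nowait()
        # Local computation never stalls waiting for a message
        local += 1.0 if latest is None else latest
        outbox.put(local)
    return local
```

In a real MPI program the probe loop would call `MPI_Iprobe` and post a matching receive only when a message is actually pending, which is what lets local iteration proceed asynchronously.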
Performance analysis of asynchronous Jacobi's method implemented in MPI, SHMEM and OpenMP
Ever-increasing core counts create the need to develop parallel algorithms that avoid closely coupled execution across all cores. In this paper we present performance analysis of several parallel asynchronous implementations of Jacobi’s method for solving systems of linear equations, using MPI, SHMEM and OpenMP. In particular we have solved systems of over 4 billion unknowns using up to 32,768 p...
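A shared-memory toy version of asynchronous Jacobi can be sketched in a few lines: threads relax disjoint blocks of the unknown vector with no barrier between sweeps, each reading whatever (possibly stale) values of the other blocks are currently visible. This illustrates the idea only; it is not the MPI/SHMEM/OpenMP implementations benchmarked in the paper, and the tolerance, sweep cap, and names are our assumptions.

```python
import threading
import numpy as np

def async_jacobi(A, b, tol=1e-10, max_sweeps=200_000, n_threads=2):
    """Asynchronous Jacobi sketch for a diagonally dominant system.

    Each thread repeatedly relaxes its own block of x without any barrier,
    and each thread tests global convergence on its own; a safety cap
    bounds the number of sweeps.
    """
    n = len(b)
    x = np.zeros(n)
    done = threading.Event()

    def relax(rows):
        for _ in range(max_sweeps):      # safety cap; normally exits early
            if done.is_set():
                break
            for i in rows:
                # Jacobi-style update reading possibly stale entries of x
                x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
            # No coordination: each thread checks the residual itself
            if np.linalg.norm(A @ x - b) < tol:
                done.set()

    blocks = np.array_split(np.arange(n), n_threads)
    threads = [threading.Thread(target=relax, args=(blk,)) for blk in blocks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return x
```

For diagonally dominant systems, chaotic relaxation of this kind converges as long as every component keeps being updated, which is why the threads can safely ignore each other's progress.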
Prospects for Truly Asynchronous Communication with Pure MPI and Hybrid MPI/OpenMP on Current Supercomputing Platforms
We investigate the ability of MPI implementations to perform truly asynchronous communication with nonblocking point-to-point calls on current highly parallel systems, including the Cray XT and XE series. For cases where no automatic overlap of communication with computation is available, we demonstrate several different ways of establishing explicitly asynchronous communication by variants of ...
Hybrid Programming with OpenMP and MPI
The basic aims of parallel programming are to decrease the runtime for the solution to a problem and to increase the size of the problem that can be solved. Conventional parallel programming practice involves either a pure OpenMP implementation on a shared-memory architecture (Fig. 1) or a pure MPI implementation on distributed-memory computer architectures (Fig. 2). The largest and fastest compute...
Publication date: 2007